Anyone else running their whole AI stack as Proxmox LXC containers? I'm currently using Open WebUI as a front-end, LiteLLM as a router, and a vLLM container per mod...
🛡️Capability VMs
My first fifteen compilers (2019)
🔬Nanopasses
TinyML is the most impressive piece of software you can run on any ESP32
xda-developers.com·12h
💬Smalltalk VMs
zFLoRA: Zero-Latency Fused Low-Rank Adapters
arxiv.org·18h
🏁Language Benchmarks
The next RISC-V processor frontier: AI
edn.com·11h
🔧RISC-V
IBM's open source Granite 4.0 Nano AI models are small enough to run locally directly in your browser
venturebeat.com·2d
🔌Microcontrollers
MIT’s Survey On Accelerators and Processors for Inference, With Peak Performance And Power Comparisons
semiengineering.com·5h
🌱Forth Kernels
Vectorizing for Fun and Performance
🔀SIMD Programming
The Wonderful Experience of Mini Computers
💬Smalltalk VMs
Building a Rules Engine from First Principles
towardsdatascience.com·1d
⚖️Inference Rules
Challenging the Fastest OSS Workflow Engine
📡Erlang BEAM
🎲 On LLMs
kaukas.mataroa.blog·13h
🎮Language Ergonomics
CAD-3D on the Atari ST
🐛Interactive Debuggers
From Lossy to Lossless Reasoning
🪜Recursive Descent
Show HN: Fast-posit, sw implementation of posit arithmetic in Rust
🔗Borrowing Extensions